
    deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch

    We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured-light scanning of a malleable interaction surface. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured-light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole-hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial-marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools through capacitive sensing on top of the input surface. Finally, we motivate our device through a number of sample applications.
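    A minimal sketch of the three-step phase-shifting math behind such a scanner, assuming three fringe images captured at 120-degree phase offsets and a hypothetical linear phase-to-height calibration (this is the textbook formula, not necessarily the authors' exact pipeline):

```python
import numpy as np

def three_phase_height(i1, i2, i3, scale=1.0):
    """Recover a wrapped phase map from three fringe images shifted by 120 degrees.

    i1, i2, i3: float arrays of equal shape (captured fringe images).
    scale: hypothetical phase-to-height factor for a calibrated gel surface.
    """
    # Standard three-step phase-shifting formula.
    phase = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    # phase is wrapped to (-pi, pi]; a real scanner unwraps it spatially
    # before converting to height.
    return scale * phase
```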

    CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs

    Computer Graphics (CG) has achieved a high level of realism, producing strikingly vivid images. This realism, however, comes at the cost of long and often expensive manual modeling, and humans can still usually distinguish CG images from real ones. We present a simple method, accessible to novice users, for making CG images look more realistic. Our system uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a novel mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system uses only image-processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our improved CG images appear more realistic than the originals.
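    As an illustration of the transfer step, a global statistics-matching color transfer in the spirit of Reinhard et al. is sketched below; the paper's actual method works per cosegmented region, so this global, RGB-space version is a simplification:

```python
import numpy as np

def match_color_stats(cg, real):
    """Shift the CG image's per-channel mean/std toward a real exemplar.

    cg, real: float arrays of shape (H, W, 3) in [0, 1]. A faithful version
    would operate per matched region rather than on the whole image.
    """
    out = cg.copy()
    for c in range(3):
        mu_c, sd_c = cg[..., c].mean(), cg[..., c].std() + 1e-8
        mu_r, sd_r = real[..., c].mean(), real[..., c].std()
        out[..., c] = (cg[..., c] - mu_c) / sd_c * sd_r + mu_r
    return np.clip(out, 0.0, 1.0)
```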

    Microgeometry capture using an elastomeric sensor

    We describe a system for capturing microscopic surface geometry. The system extends the retrographic sensor [Johnson and Adelson 2009] to the microscopic domain, demonstrating spatial resolution as small as 2 microns. In contrast to existing microgeometry-capture techniques, the system is not affected by the optical characteristics of the surface being measured: it captures the same geometry whether the object is matte, glossy, or transparent. In addition, the hardware design allows for a variety of form factors, including a hand-held device that can be used to capture high-resolution surface geometry in the field. We achieve these results with a combination of improved sensor materials, illumination design, and reconstruction algorithms, compared to the original sensor of Johnson and Adelson [2009].
    National Science Foundation (U.S.) (Grant 0739255); National Institutes of Health (U.S.) (Contract 1-R01-EY019292-01)
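    The geometry recovery behind such sensors can be sketched with classic Lambertian photometric stereo; the actual device uses a calibrated lookup from illumination color to surface normal, so the known, non-coplanar light directions below are an assumption of this sketch:

```python
import numpy as np

def normals_from_shading(images, light_dirs):
    """Estimate per-pixel surface normals from shading under known lights.

    images: (3, H, W) intensities captured under three light directions.
    light_dirs: (3, 3) matrix whose rows are unit, non-coplanar light vectors.
    """
    k, h, w = images.shape
    b = images.reshape(k, -1)                    # (3, H*W) observations
    g = np.linalg.solve(light_dirs, b)           # albedo-scaled normals
    n = g / (np.linalg.norm(g, axis=0) + 1e-8)   # normalize to unit length
    return n.reshape(3, h, w)
```

    Integrating the resulting normal field (e.g. by solving a Poisson equation) would then yield the height map.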

    Robotic Defect Inspection with Visual and Tactile Perception for Large-scale Components

    In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. Automated surface-anomaly detection has therefore become a promising area of research in industrial inspection systems. A particular challenge in industries with large-scale components, such as aircraft and heavy machinery, is inspecting large parts for very small defects; moreover, these parts can be curved. To address this challenge, we present a 2-stage multi-modal inspection pipeline with visual and tactile sensing. Our approach combines the strengths of both modalities: vision identifies and localizes defects from a global view, and tactile scanning of the localized areas identifies the remaining defects. To benchmark our approach, we propose a novel real-world dataset with multiple metallic defect types per image, collected in production environments on real aerospace manufacturing parts, as well as online robot experiments in two environments. Our approach identifies 85% of defects after Stage I and 100% of defects after Stage II. The dataset is publicly available at https://zenodo.org/record/8327713
    Comment: This is a pre-print of an International Conference on Intelligent Robots and Systems (IROS) 2023 publication.
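    The coarse-to-fine control flow of the two-stage pipeline might look like the sketch below; `vision_detect` and `tactile_scan` are hypothetical stand-ins for the paper's actual detectors:

```python
def two_stage_inspect(image, robot, vision_detect, tactile_scan):
    """Stage I: global visual detection; Stage II: tactile follow-up.

    vision_detect(image) -> list of candidate defect regions (hypothetical).
    tactile_scan(robot, region) -> list of confirmed defects (hypothetical).
    """
    candidates = vision_detect(image)   # Stage I: localize from a global view
    defects = []
    for region in candidates:
        # Stage II: scan each localized area to catch remaining defects.
        defects.extend(tactile_scan(robot, region))
    return defects
```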

    Video face replacement

    We present a method for replacing facial performances in video. Our approach accounts for differences in identity, visual appearance, speech, and timing between source and target videos. Unlike prior work, it does not require substantial manual operation or complex acquisition hardware, only single-camera video. We use a 3D multilinear model to track the facial performance in both videos. Using the corresponding 3D geometry, we warp the source to the target face and retime the source to match the target performance. We then compute an optimal seam through the video volume that maintains temporal consistency in the final composite. We showcase the use of our method on a variety of examples and present a user study suggesting that our results are difficult to distinguish from real video footage.
    National Science Foundation (U.S.) (Grant PHY-0835713); National Science Foundation (U.S.) (Grant DMS-0739255)
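    A single-frame sketch of a minimal-cost seam via dynamic programming is shown below; the paper computes a temporally consistent seam through the full 3D video volume, so this 2D version only illustrates the idea:

```python
import numpy as np

def min_cost_seam(diff):
    """Trace a vertical seam of minimal cumulative cost through a difference map.

    diff: (H, W) array of per-pixel source/target disagreement; returns one
    column index per row, giving a connected top-to-bottom seam.
    """
    h, w = diff.shape
    cost = diff.astype(float)
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1)   # upper-left neighbor
        left[0] = np.inf
        right = np.roll(cost[y - 1], -1) # upper-right neighbor
        right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack from the cheapest endpoint on the bottom row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```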

    Hearing Loss in Stranded Odontocete Dolphins and Whales

    The causes of dolphin and whale strandings can often be difficult to determine. Because toothed whales rely on echolocation for orientation and feeding, hearing deficits could lead to stranding. We report the results of auditory evoked potential measurements from eight species of odontocete cetaceans that were found stranded or severely entangled in fishing gear during the period 2004 through 2009. Approximately 57% of the bottlenose dolphins and 36% of the rough-toothed dolphins had significant hearing deficits, with a reduction in sensitivity equivalent to severe (70–90 dB) or profound (>90 dB) hearing loss in humans. The only stranded short-finned pilot whale examined had profound hearing loss. No impairments were detected in seven Risso's dolphins from three different stranding events, two pygmy killer whales, one Atlantic spotted dolphin, one spinner dolphin, or a juvenile Gervais' beaked whale. Hearing impairment could play a significant role in some cetacean stranding events, and the hearing of all cetaceans in rehabilitation should be tested.
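    The severity bands quoted above map directly onto a small classifier; the thresholds are the abstract's human-audiology equivalents, and the label for the range below 70 dB is this sketch's own placeholder:

```python
def classify_hearing_loss(shift_db):
    """Map a hearing-sensitivity reduction (dB) to the abstract's severity bands."""
    if shift_db > 90:
        return "profound"      # >90 dB reduction
    if shift_db >= 70:
        return "severe"        # 70-90 dB reduction
    return "below severe"      # the abstract does not subdivide this range
```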